
    Efficient Value of Information Computation

    One of the most useful sensitivity analysis techniques of decision analysis is the computation of value of information (or clairvoyance), the difference in value obtained by changing the decisions before which some of the uncertainties are observed. In this paper, some simple but powerful extensions to previous algorithms are introduced which allow an efficient value of information calculation on the rooted cluster tree (or strong junction tree) used to solve the original decision problem. Comment: Appears in Proceedings of the Fifteenth Conference on Uncertainty in Artificial Intelligence (UAI-1999).
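
    For orientation on the quantity being computed (a standard textbook formulation, not the paper's algorithm): the value of perfect information on an uncertainty X is the gap between the optimal expected value when X is observed before the decision d is made and the optimal expected value without that observation,

        \mathrm{VOI}(X) \;=\; \mathbb{E}_{X}\Bigl[\max_{d}\,\mathbb{E}\bigl[v(d,\Theta)\mid X\bigr]\Bigr] \;-\; \max_{d}\,\mathbb{E}\bigl[v(d,\Theta)\bigr],

    where v is the value function and \Theta denotes the remaining uncertainties; the difference is non-negative for an expected-value maximizer.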

    Bayes-Ball: The Rational Pastime (for Determining Irrelevance and Requisite Information in Belief Networks and Influence Diagrams)

    One of the benefits of belief networks and influence diagrams is that so much knowledge is captured in the graphical structure. In particular, statements of conditional irrelevance (or independence) can be verified in time linear in the size of the graph. To resolve a particular inference query or decision problem, only some of the possible states and probability distributions must be specified, the "requisite information." This paper presents a new, simple, and efficient "Bayes-ball" algorithm which is well suited both to new students of belief networks and to state-of-the-art implementations. The Bayes-ball algorithm determines irrelevant sets and requisite information more efficiently than existing methods, and is linear in the size of the graph for belief networks and influence diagrams. Comment: Appears in Proceedings of the Fourteenth Conference on Uncertainty in Artificial Intelligence (UAI-1998).
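
    The ball-passing idea can be sketched compactly. The Python below is a minimal sketch in the spirit of Bayes-Ball, following the standard "reachable" formulation of d-separation rather than the paper's exact pseudocode; the adjacency maps `parents` and `children` and the function name are illustrative assumptions.

        from collections import deque

        def bayes_ball_reachable(parents, children, sources, observed):
            """Nodes reachable from `sources` along active trails given `observed`.

            `parents[n]` and `children[n]` are iterables of n's parents/children.
            A node Y is d-separated from `sources` by `observed` iff Y is not returned.
            """
            # Phase 1: collect the observed nodes and all of their ancestors.
            ancestors, frontier = set(), set(observed)
            while frontier:
                n = frontier.pop()
                if n not in ancestors:
                    ancestors.add(n)
                    frontier.update(parents[n])

            # Phase 2: pass the "ball" around; a visit is (node, direction), where
            # 'up' means the ball arrived from a child, 'down' from a parent.
            visited, reachable = set(), set()
            queue = deque((s, 'up') for s in sources)
            while queue:
                node, direction = queue.popleft()
                if (node, direction) in visited:
                    continue
                visited.add((node, direction))
                if node not in observed:
                    reachable.add(node)
                if direction == 'up' and node not in observed:
                    # Unobserved node reached from a child: ball passes everywhere.
                    queue.extend((p, 'up') for p in parents[node])
                    queue.extend((c, 'down') for c in children[node])
                elif direction == 'down':
                    if node not in observed:
                        # Chain through an unobserved node: continue to children.
                        queue.extend((c, 'down') for c in children[node])
                    if node in ancestors:
                        # Node is observed or has an observed descendant, so the
                        # v-structure here is active: bounce back up to parents.
                        queue.extend((p, 'up') for p in parents[node])
            return reachable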

    DAVID: Influence Diagram Processing System for the Macintosh

    Influence diagrams are a directed graph representation for decision problems under uncertainty, with uncertainties represented as probabilities. The graph distinguishes between those variables which are under the control of a decision maker (decisions, shown as rectangles) and those which are not (chances, shown as ovals), as well as explicitly denoting a goal for solution (value, shown as a rounded rectangle). Comment: Appears in Proceedings of the Second Conference on Uncertainty in Artificial Intelligence (UAI-1986).
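
    As a purely illustrative data-structure sketch of the three node types such a diagram distinguishes (this is not DAVID's interface, which was a Macintosh application; every name below is made up):

        from dataclasses import dataclass, field
        from enum import Enum

        class NodeType(Enum):
            DECISION = "rectangle"        # under the decision maker's control
            CHANCE = "oval"               # uncertain, quantified by probabilities
            VALUE = "rounded rectangle"   # the goal to be optimized

        @dataclass
        class Node:
            name: str
            kind: NodeType
            parents: list = field(default_factory=list)   # incoming arcs

        # Toy diagram: decide on an umbrella after seeing a weather forecast.
        weather  = Node("Weather", NodeType.CHANCE)
        forecast = Node("Forecast", NodeType.CHANCE, parents=[weather])
        umbrella = Node("Take umbrella?", NodeType.DECISION, parents=[forecast])
        value    = Node("Satisfaction", NodeType.VALUE, parents=[weather, umbrella])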

    A Graph-Based Inference Method for Conditional Independence

    The graphoid axioms for conditional independence, originally described by Dawid [1979], are fundamental to probabilistic reasoning [Pearl, 1988]. Such axioms provide a mechanism for manipulating conditional independence assertions without resorting to their numerical definition. This paper explores a representation for independence statements using multiple undirected graphs and some simple graphical transformations. The independence statements derivable in this system are equivalent to those obtainable by the graphoid axioms. Therefore, this is a purely graphical proof technique for conditional independence. Comment: Appears in Proceedings of the Seventh Conference on Uncertainty in Artificial Intelligence (UAI-1991).
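
    For reference, writing I(X, Z, Y) for "X is independent of Y given Z", the axioms in question are the four semi-graphoid properties, with intersection added for strictly positive distributions:

        \begin{align*}
        &\text{Symmetry:}      && I(X,Z,Y) \;\Rightarrow\; I(Y,Z,X)\\
        &\text{Decomposition:} && I(X,Z,Y\cup W) \;\Rightarrow\; I(X,Z,Y)\\
        &\text{Weak union:}    && I(X,Z,Y\cup W) \;\Rightarrow\; I(X,Z\cup W,Y)\\
        &\text{Contraction:}   && I(X,Z,Y) \wedge I(X,Z\cup Y,W) \;\Rightarrow\; I(X,Z,Y\cup W)\\
        &\text{Intersection (positive } P\text{):} && I(X,Z\cup W,Y) \wedge I(X,Z\cup Y,W) \;\Rightarrow\; I(X,Z,Y\cup W)
        \end{align*}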

    Three new sensitivity analysis methods for influence diagrams

    Performing sensitivity analysis for influence diagrams using the decision circuit framework is particularly convenient, since the partial derivatives with respect to every parameter are readily available [Bhattacharjya and Shachter, 2007; 2008]. In this paper we present three non-linear sensitivity analysis methods that utilize this partial derivative information and therefore do not require re-evaluating the decision situation multiple times. Specifically, we show how to efficiently compare strategies in decision situations, perform sensitivity analysis with respect to risk aversion, and compute the value of perfect hedging [Seyller, 2008]. Comment: Appears in Proceedings of the Twenty-Sixth Conference on Uncertainty in Artificial Intelligence (UAI-2010).
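
    A schematic reading of why the derivatives suffice (my gloss, not the paper's derivation): the circuit output, e.g. the maximal expected utility, is piecewise multilinear in the network parameters, so around a nominal parameter value \theta_0, and as long as the optimal strategy does not change,

        \mathrm{MEU}(\theta) \;\approx\; \mathrm{MEU}(\theta_0) \;+\; \left.\frac{\partial\,\mathrm{MEU}}{\partial\theta}\right|_{\theta_0} (\theta - \theta_0),

    with the approximation exact wherever the output is linear in \theta.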

    A Measure of Decision Flexibility

    We propose a decision-analytical approach to comparing the flexibility of decision situations from the perspective of a decision-maker who exhibits constant risk aversion over a monetary value model. Our approach is simple yet seems to be consistent with a variety of flexibility concepts, including robust and adaptive alternatives. We try to compensate within the model for uncertainty that was not anticipated or not modeled. This approach not only allows one to compare the flexibility of plans, but also guides the search for new, more flexible alternatives. Comment: Appears in Proceedings of the Twelfth Conference on Uncertainty in Artificial Intelligence (UAI-1996).
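
    For background on the assumed risk attitude (standard decision analysis, not the paper's flexibility measure itself): constant risk aversion over monetary value x is conventionally modeled by the exponential utility with risk tolerance \rho, whose certain equivalent for an uncertain payoff X is

        u(x) = -e^{-x/\rho}, \qquad \mathrm{CE} = -\rho\,\ln \mathbb{E}\bigl[e^{-X/\rho}\bigr].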

    A Definition and Graphical Representation for Causality

    We present a precise definition of cause and effect in terms of a fundamental notion called unresponsiveness. Our definition is based on Savage's (1954) formulation of decision theory and departs from the traditional view of causation in that our causal assertions are made relative to a set of decisions. An important consequence of this departure is that we can reason about cause locally, not requiring a causal explanation for every dependency. Such local reasoning can be beneficial because it may not be necessary to determine whether a particular dependency is causal to make a decision. Also in this paper, we examine the graphical encoding of causal relationships. We show that influence diagrams in canonical form are an accurate and efficient representation of causal relationships. In addition, we establish a correspondence between canonical form and Pearl's causal theory. Comment: Appears in Proceedings of the Eleventh Conference on Uncertainty in Artificial Intelligence (UAI-1995).

    Evaluating influence diagrams with decision circuits

    Although a number of related algorithms have been developed to evaluate influence diagrams, exploiting the conditional independence in the diagram, the exact solution has remained intractable for many important problems. In this paper we introduce decision circuits as a means to exploit the local structure usually found in decision problems and to improve the performance of influence diagram analysis. This work builds on the probabilistic inference algorithms using arithmetic circuits to represent Bayesian belief networks [Darwiche, 2003]. Once compiled, these arithmetic circuits efficiently evaluate probabilistic queries on the belief network, and methods have been developed to exploit both the global and local structure of the network. We show that decision circuits can be constructed in a similar fashion and promise similar benefits. Comment: Appears in Proceedings of the Twenty-Third Conference on Uncertainty in Artificial Intelligence (UAI-2007).
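
    To make the flavor of such a circuit concrete, here is a toy bottom-up evaluator (my own illustrative formulation, not the paper's compilation procedure): chance nodes contribute sums of products of parameters, while decision nodes become maximizations, so evaluating the circuit yields a maximal expected utility.

        def evaluate(node):
            """Evaluate a toy circuit node bottom-up.

            A node is a number (leaf parameter: a probability or a utility) or a
            tuple (op, children) with op in {'+', '*', 'max'}.
            """
            if isinstance(node, (int, float)):
                return node
            op, children = node
            values = [evaluate(c) for c in children]
            if op == '+':
                return sum(values)
            if op == '*':
                product = 1.0
                for v in values:
                    product *= v
                return product
            if op == 'max':            # decision node: pick the best alternative
                return max(values)
            raise ValueError("unknown op: " + repr(op))

        # One chance variable with P = 0.3 / 0.7 and one decision with alternatives a, b:
        circuit = ('max', [
            ('+', [('*', [0.3, 10.0]), ('*', [0.7, 2.0])]),   # alternative a: EU = 4.4
            ('+', [('*', [0.3, 1.0]),  ('*', [0.7, 5.0])]),   # alternative b: EU = 3.8
        ])
        print(evaluate(circuit))   # -> 4.4, the maximal expected utility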

    Using Potential Influence Diagrams for Probabilistic Inference and Decision Making

    The potential influence diagram is a generalization of the standard "conditional" influence diagram, a directed network representation for probabilistic inference and decision analysis [Ndilikilikesha, 1991]. It allows efficient inference calculations corresponding exactly to those on undirected graphs. In this paper, we explore the relationship between potential and conditional influence diagrams and provide insight into the properties of the potential influence diagram. In particular, we show how to convert a potential influence diagram into a conditional influence diagram, and how to view the potential influence diagram operations in terms of the conditional influence diagram. Comment: Appears in Proceedings of the Ninth Conference on Uncertainty in Artificial Intelligence (UAI-1993).

    A Decision-Based View of Causality

    Most traditional models of uncertainty have focused on the associational relationship among variables as captured by conditional dependence. In order to successfully manage intelligent systems for decision making, however, we must be able to predict the effects of actions. In this paper, we attempt to unite two branches of research that address such predictions: causal modeling and decision analysis. First, we provide a definition of causal dependence in decision-analytic terms, which we derive from consequences of causal dependence cited in the literature. Using this definition, we show how causal dependence can be represented within an influence diagram. In particular, we identify two inadequacies of an ordinary influence diagram as a representation for cause. We introduce a special class of influence diagrams, called causal influence diagrams, which corrects one of these problems, and identify situations where the other inadequacy can be eliminated. In addition, we describe the relationships between Howard Canonical Form and existing graphical representations of cause. Comment: Appears in Proceedings of the Tenth Conference on Uncertainty in Artificial Intelligence (UAI-1994).